07 - Object detection - how to deploy a model in ROS

Robotics I

Poznan University of Technology, Institute of Robotics and Machine Intelligence

Laboratory 7 - Object detection - how to deploy a model in ROS

Goals

The objectives of this laboratory are to:

- deploy a trained object detection model as a ROS 2 node,
- run inference on a camera stream replayed from ROSBAG files,
- save the detection results to a text file.

Resources

Model deployment

The model trained in the previous laboratory can be used to detect objects in a camera stream. In this laboratory, you will learn how to deploy the model in ROS and run inference on a camera stream.

And remember, this is where your code meets the real world… and immediately finds a bug you missed! πŸš€ πŸš€ πŸš€


It is just a meme :)

Note: Everything we do today should be done inside the container!

πŸ’₯ πŸ’₯ πŸ’₯ Task πŸ’₯ πŸ’₯ πŸ’₯

In this task, you will deploy a trained object detection model in ROS.

Requirements

A graphics processing unit (GPU) is recommended to run the object detection model. If you don’t have an NVIDIA GPU, you can use the CPU version of the container, but inference will be noticeably slower.
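
If you want to verify that the container actually sees your GPU, a quick check with PyTorch (installed as an ultralytics dependency) can look like this:

```python
# Quick check (inside the container) whether PyTorch can see the GPU.
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```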

Preparation

  1. Download the example ROSBAG files:
bash src/robotics_object_detection/scripts/download_example_bags.bash

The script creates the data directory with the following ROSBAG files:

- bag_car_chase
- bag_overpass

  2. Check the available topics in the ROSBAG files:
ros2 bag info data/bag_car_chase

The information will be useful for the next steps.

  3. Download the trained model:
bash src/robotics_object_detection/scripts/download_model_weights.bash

It downloads the yolo11n.pt model weights into the src/robotics_object_detection/weights/ directory. If you want to use your own model, place the model weights from the previous laboratory in the same directory.

Load the model in ROS and run inference

  1. Open the src/robotics_object_detection/robotics_object_detection/detect_results_publisher.py file. All instructions are in the file.

  2. Find the class constructor of DetectResultsPublisher. Create a subscription object to read the compressed image from a topic (a minimal sketch is shown after the callback description below).

Note: The goal of the class is to capture the image from the topic, decode it into a numpy array, and run inference on the image.

The self.image_callback function decodes the image and then calls three functions: self.infer_image, self.publish_results, and self.write_results.
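
As a reference, here is a minimal sketch (not the lab template) of the subscription and decoding steps; the topic name is an assumption - check ros2 bag info for the actual one:

```python
import cv2
import numpy as np
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage


class DetectResultsPublisherSketch(Node):
    def __init__(self):
        super().__init__("detect_results_publisher")
        # Queue depth 10 is a common default for camera streams.
        self.subscription = self.create_subscription(
            CompressedImage,
            "/camera/image/compressed",  # assumed topic name
            self.image_callback,
            10,
        )

    def image_callback(self, msg: CompressedImage):
        # Decode the compressed byte buffer into a BGR numpy array.
        buffer = np.frombuffer(bytes(msg.data), dtype=np.uint8)
        image = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
        # ... infer_image / publish_results / write_results go here ...
```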

  3. Find the self.infer_image function to run inference on the image. It uses the ultralytics library; you can find the documentation here. Detailed instructions are in the function definition.
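
For orientation, a minimal, self-contained ultralytics inference sketch (not the lab's exact template; the weights path matches the downloaded file) might look like this:

```python
from ultralytics import YOLO

model = YOLO("src/robotics_object_detection/weights/yolo11n.pt")

def infer_image(image):
    # Run the detector on a single BGR numpy array; calling the model
    # returns a list of Results, one per input image.
    results = model(image)[0]
    # Each box carries xyxy pixel coordinates, a confidence score, and
    # a class id that maps to a name via results.names.
    for box in results.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        score = float(box.conf[0])
        name = results.names[int(box.cls[0])]
        print(name, f"{score:.2f}", (x1, y1, x2, y2))
    return results
```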

  4. Find the self.publish_results function to visualize the results. It uses the OpenCV library to draw the bounding boxes on the image. Detailed instructions are in the function definition.
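
A minimal sketch of drawing detections with OpenCV and republishing them follows; self.detected_image_publisher is an assumed attribute name, not necessarily the one used in the lab template:

```python
import cv2
from sensor_msgs.msg import CompressedImage

def publish_results(self, image, results):
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        label = f"{results.names[int(box.cls[0])]} {float(box.conf[0]):.2f}"
        # Green box plus a "class score" label above the top-left corner.
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(image, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    # Re-encode as JPEG and publish as a CompressedImage message.
    msg = CompressedImage()
    msg.format = "jpeg"
    msg.data = cv2.imencode(".jpg", image)[1].tobytes()
    self.detected_image_publisher.publish(msg)  # assumed publisher name
```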

  5. Build the package:

colcon build --packages-select robotics_object_detection

  6. Open three terminals in total, attach them to the container, and source the workspace in each using source install/setup.bash.

Terminal 1 - play the ROSBAG file (in an infinite loop):

ros2 bag play data/bag_car_chase --loop

Terminal 2 - run the rqt node to visualize the image (RViz cannot display compressed images):

ros2 run rqt_image_view rqt_image_view

In the rqt_image_view window, select the topic with the source image.

Terminal 3 - run the detection node:

ros2 run robotics_object_detection detect_results_publisher

The node subscribes to the topic with the source image, runs inference on each frame, and publishes the annotated results. To see them in the rqt_image_view window, change the topic to /camera/detected_image/compressed.

Take a screenshot of the detection results visualized in rqt_image_view on bag_car_chase.

The image should look like this:

Note: Your screenshot does not have to be identical ;)


An example object detection image. Source: Own materials

  7. Stop the ROSBAG playback and the detection node.

Save results to a text file

  1. Open the src/robotics_object_detection/robotics_object_detection/detect_results_publisher.py file.

  2. Find the self.write_results function. As you can see, the function just calls the write_as_csv method.

  3. Find the definition of the write_as_csv method in the DetectResults class. Modify the function to write the results to a txt file in the defined format.
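
The exact output format is defined in the lab template; purely as an illustration, and with an assumed column order and assumed internal attribute names, a sketch could look like this:

```python
# Illustration only: the column order and the self.detections attribute
# are assumptions, not the lab's defined format -- follow the template.
def write_as_csv(self, path="detection_results.txt"):
    with open(path, "a") as f:
        for det in self.detections:  # hypothetical list of detections
            x1, y1, x2, y2 = det.bbox
            f.write(f"{det.class_name},{det.score:.2f},"
                    f"{x1:.1f},{y1:.1f},{x2:.1f},{y2:.1f}\n")
```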

  4. Build the package:

colcon build --packages-select robotics_object_detection

  5. Open two terminals in total, attach them to the container, and source the workspace in each using source install/setup.bash.

Terminal 1 - run the detection node:

ros2 run robotics_object_detection detect_results_publisher

Note: The node has to be running before you play the ROSBAG file.

Terminal 2 - play the ROSBAG file (just once!):

ros2 bag play data/bag_overpass

  6. When the ROSBAG file ends, stop the detection node. This generates the detection_results.txt file in your workspace.

The file should look like this:

Note: The file does not have to be identical to the screenshot ;)


An example file content. Source: Own materials

πŸ’₯ πŸ’₯ πŸ’₯ Assignment πŸ’₯ πŸ’₯ πŸ’₯

To pass the course, you need to upload the following files to the eKursy platform:

- the screenshot of the detection results visualized in rqt_image_view on bag_car_chase,
- the generated detection_results.txt file.